EBBINNOT: A Hardware Efficient Hybrid Event-Frame Tracker for Stationary Dynamic Vision Sensors
As an alternative sensing paradigm, dynamic vision sensors (DVS) have been
recently explored to tackle scenarios where conventional sensors result in high
data rate and processing time. This paper presents a hybrid event-frame
approach for detecting and tracking objects recorded by a stationary
neuromorphic sensor, thereby exploiting the sparse DVS output in a low-power
setting for traffic monitoring. Specifically, we propose a hardware-efficient
processing pipeline that optimizes memory and computational requirements,
enabling long-term battery-powered usage for IoT applications. To exploit the
background-removal property of a static DVS, we propose an event-based binary
image creation scheme that signals the presence or absence of events within a
frame duration. This reduces the memory requirement and enables the use of
simple algorithms such as median filtering and connected component labeling
for denoising and region proposal, respectively. To overcome the fragmentation
issue, we propose a YOLO-inspired neural-network-based detector and classifier
that merges fragmented region proposals. Finally, a new overlap-based tracker
is proposed that exploits the overlap between detections and tracks, with
heuristics to handle occlusion. The proposed pipeline is evaluated on more
than 5 hours of traffic recordings spanning three different locations and two
different neuromorphic sensors (DVS and CeleX), and demonstrates similar
performance on both. Compared to existing event-based feature trackers, our
method provides similar accuracy while requiring approximately 6 times fewer
computations. To the best of our knowledge, this is the first time a
stationary-DVS-based traffic monitoring solution has been extensively compared
to simultaneously recorded RGB frame-based methods, showing tremendous promise
by outperforming state-of-the-art deep learning solutions.
Comment: 16 pages, 13 figures
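The binary event-frame stage described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names (`binary_event_frame`, `region_proposals`), the 3x3 median window, and the use of SciPy's connected-component labeling are assumptions for demonstration.

```python
import numpy as np
from scipy.ndimage import median_filter, label, find_objects

def binary_event_frame(events, shape):
    """Mark pixels that received at least one event during one frame interval.

    events: iterable of (row, col) pixel addresses; shape: (rows, cols).
    Only presence/absence is stored, which keeps memory at 1 bit per pixel.
    """
    frame = np.zeros(shape, dtype=np.uint8)
    for r, c in events:
        frame[r, c] = 1
    return frame

def region_proposals(frame):
    """Denoise with a median filter, then label connected foreground regions."""
    denoised = median_filter(frame, size=3)   # removes isolated noise events
    labels, _ = label(denoised)               # 4-connected components
    # Return bounding boxes as (r0, c0, r1, c1) for each labeled region.
    return [(sl[0].start, sl[1].start, sl[0].stop, sl[1].stop)
            for sl in find_objects(labels)]

# Example: a 5x5 blob of events plus one isolated noise event.
events = [(r, c) for r in range(4, 9) for c in range(4, 9)] + [(12, 12)]
frame = binary_event_frame(events, (16, 16))
boxes = region_proposals(frame)   # the noise pixel is filtered out
```

On the example above, the median filter erases the lone noise event at (12, 12) while the 5x5 blob survives as a single region proposal, illustrating why such simple operators suffice once the frame is reduced to a binary image.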
Observations and Proposed Guidelines for Institutional & Academic Development from the Students' Point of View
This report proposes guidelines for an institutional framework from the students' point of view.
[In Press] A hybrid neuromorphic object tracking and classification framework for real-time systems
Deep learning inference that needs to take place largely on the "edge" is a highly computational and memory-intensive workload, making it intractable for low-power, embedded platforms such as mobile nodes and remote security applications. To address this challenge, this article proposes a real-time, hybrid neuromorphic framework for object tracking and classification using event-based cameras, which possess desirable properties such as low power consumption (5–14 mW) and high dynamic range (120 dB). Unlike traditional event-by-event processing approaches, this work uses a mixed frame-and-event approach to obtain energy savings without sacrificing performance. Using a frame-based region proposal method based on the density of foreground events, a hardware-friendly object tracking scheme is implemented using the apparent object velocity while tackling occlusion scenarios. The frame-based object track input is converted back to spikes for TrueNorth (TN) classification via the energy-efficient deep network (EEDN) pipeline. Using originally collected datasets, we train the TN model on the hardware track outputs, instead of using ground-truth object locations as is commonly done, and demonstrate the ability of our system to handle practical surveillance scenarios. As an alternative tracker paradigm, we also propose a continuous-time tracker with a C++ implementation where each event is processed individually, which better exploits the low latency and asynchronous nature of neuromorphic vision sensors. Subsequently, we extensively compare the proposed methodologies to state-of-the-art event-based and frame-based methods for object tracking and classification, and demonstrate the use case of our neuromorphic approach for real-time and embedded applications without sacrificing performance. Finally, we also showcase the efficacy of the proposed neuromorphic system against a standard RGB camera setup when simultaneously evaluated over several hours of traffic recordings.
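Both abstracts describe associating fresh detections with existing tracks by bounding-box overlap. A minimal sketch of such an overlap (IoU) based greedy association is given below; the function names (`iou`, `associate`), the greedy matching order, and the 0.3 overlap threshold are illustrative assumptions, not details taken from either paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (r0, c0, r1, c1)."""
    r0, c0 = max(a[0], b[0]), max(a[1], b[1])
    r1, c1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, r1 - r0) * max(0, c1 - c0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def associate(tracks, detections, thresh=0.3):
    """Greedily assign each detection to the unclaimed track it overlaps most.

    Returns (assignments, unmatched): a dict mapping detection index to track
    index, and a list of detection indices that should spawn new tracks.
    """
    assignments, unmatched = {}, []
    for d_idx, det in enumerate(detections):
        best, best_iou = None, thresh
        for t_idx, trk in enumerate(tracks):
            if t_idx in assignments.values():
                continue                      # each track claims one detection
            overlap = iou(trk, det)
            if overlap > best_iou:
                best, best_iou = t_idx, overlap
        if best is None:
            unmatched.append(d_idx)           # no overlap above threshold
        else:
            assignments[d_idx] = best
    return assignments, unmatched

# Example: one existing track, one overlapping detection, one distant one.
tracks = [(0, 0, 10, 10)]
dets = [(1, 1, 11, 11), (50, 50, 60, 60)]
assigned, new = associate(tracks, dets)   # first detection matched, second new
```

Heuristics for occlusion, as mentioned in the abstracts, would sit on top of this core: for example, keeping an unmatched track alive for a few frames rather than deleting it immediately.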